    An observational study of children interacting with an augmented story book

    We present findings of an observational study investigating how young children interact with augmented reality story books. Children aged between 6 and 7 read and interacted with one of two story books aimed at early literacy education. The books' pages were augmented with animated virtual 3D characters, sound, and interactive tasks. Introducing novel media to young children requires system and story designers to consider not only technological issues but also questions arising from story design and the design of interactive sequences. We discuss the findings of our study and their implications for the implementation of augmented story books.

    AR Tennis

    Modern mobile phones combine a display and processing power with a camera, and so are ideal platforms for augmented reality (AR), the overlay of computer graphics on the real world. Henrysson [2] has ported the popular ARToolKit [1] computer vision library to the Symbian operating system, which allows developers to build AR applications that run on a mobile phone.
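
    The abstract contains no code, but the per-frame loop of the classic desktop ARToolKit C API gives a feel for what such a port exposes. This is a minimal sketch: the intrinsics file name, pattern file, marker width, and threshold are illustrative assumptions, and the Symbian port differs mainly in camera capture and rendering.

        #include <AR/ar.h>
        #include <AR/param.h>

        // One-time setup: load camera intrinsics and the trained marker pattern.
        ARParam cparam;
        int     patt_id;
        double  patt_width     = 80.0;           // marker side in mm (assumed)
        double  patt_center[2] = {0.0, 0.0};

        void setup(int xsize, int ysize) {
            ARParam wparam;
            arParamLoad("camera_para.dat", 1, &wparam);     // intrinsics file (assumed name)
            arParamChangeSize(&wparam, xsize, ysize, &cparam);
            arInitCparam(&cparam);
            patt_id = arLoadPatt("patt.hiro");              // trained marker pattern (assumed)
        }

        // Per frame: detect markers and recover the camera-to-marker transform,
        // which is then used to draw graphics registered to the real world.
        bool track(ARUint8* frame, double trans[3][4]) {
            ARMarkerInfo* info;
            int           num;
            if (arDetectMarker(frame, 100 /* binarization threshold */, &info, &num) < 0)
                return false;
            for (int i = 0; i < num; ++i)
                if (info[i].id == patt_id) {
                    arGetTransMat(&info[i], patt_center, patt_width, trans);
                    return true;                            // trans: 3x4 marker pose
                }
            return false;
        }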

    Local Descriptor by Zernike Moments for Real-time Keypoint Matching

    This paper presents a real-time keypoint matching algorithm using a local descriptor derived from Zernike moments. From an input image, we find a set of keypoints using an existing corner detection algorithm. At each keypoint we extract a fixed-size image patch and compute a local descriptor derived from Zernike moments. The proposed local descriptor is invariant to rotation and illumination changes. To speed up the computation of Zernike moments, we compute the Zernike basis functions in advance and store them in a set of lookup tables. Matching is performed with an Approximate Nearest Neighbor (ANN) method and refined by a RANSAC algorithm. In our experiments we confirmed that 320×240 video with scale, rotation, illumination, and even 3D viewpoint changes is processed at 25–30 Hz using the proposed method. Unlike existing keypoint matching algorithms, our approach also registers a reference image in real time.
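
    As a rough illustration of the lookup-table idea (not the paper's exact implementation; the patch size, maximum moment order, and normalization below are assumptions), the Zernike basis can be sampled once per (n, m) order and reused for every patch. Taking moment magnitudes provides the rotation invariance, and patch mean/variance normalization the illumination invariance, described above.

        #include <array>
        #include <cmath>
        #include <complex>
        #include <utility>
        #include <vector>

        constexpr int PATCH = 16;                       // patch side length (assumption)
        using Patch = std::array<float, PATCH * PATCH>;

        static double factorial(int k) {
            double f = 1.0;
            for (int i = 2; i <= k; ++i) f *= i;
            return f;
        }

        // Radial polynomial R_{n,m}(rho), defined for n - |m| even, rho in [0,1].
        static double radial(int n, int m, double rho) {
            m = std::abs(m);
            double r = 0.0;
            for (int s = 0; s <= (n - m) / 2; ++s) {
                double c = ((s % 2) ? -1.0 : 1.0) * factorial(n - s) /
                           (factorial(s) * factorial((n + m) / 2 - s) *
                            factorial((n - m) / 2 - s));
                r += c * std::pow(rho, n - 2 * s);
            }
            return r;
        }

        // Precomputed conjugate basis V*_{n,m} sampled on the patch grid:
        // one lookup table per (n, m) order. Only m >= 0 is kept, since
        // |A_{n,-m}| = |A_{n,m}|.
        struct ZernikeTables {
            std::vector<std::vector<std::complex<double>>> basis;  // [order][pixel]
            std::vector<std::pair<int, int>> orders;
            explicit ZernikeTables(int max_n) {
                for (int n = 0; n <= max_n; ++n)
                    for (int m = 0; m <= n; ++m) {
                        if ((n - m) % 2) continue;
                        orders.push_back({n, m});
                        std::vector<std::complex<double>> tab(PATCH * PATCH, 0.0);
                        for (int y = 0; y < PATCH; ++y)
                            for (int x = 0; x < PATCH; ++x) {
                                double u = (2.0 * x - (PATCH - 1)) / PATCH;  // unit disc
                                double v = (2.0 * y - (PATCH - 1)) / PATCH;
                                double rho = std::hypot(u, v);
                                if (rho > 1.0) continue;      // outside the disc
                                double th = std::atan2(v, u);
                                tab[y * PATCH + x] = radial(n, m, rho) *
                                    std::exp(std::complex<double>(0.0, -m * th));
                            }
                        basis.push_back(std::move(tab));
                    }
            }
        };

        // Descriptor: magnitudes of Zernike moments of the normalized patch.
        // Magnitudes are rotation invariant; mean/std normalization gives
        // robustness to affine illumination change.
        std::vector<double> describe(const Patch& p, const ZernikeTables& zt) {
            double mean = 0.0, var = 0.0;
            for (float v : p) mean += v;
            mean /= p.size();
            for (float v : p) var += (v - mean) * (v - mean);
            double stddev = std::sqrt(var / p.size()) + 1e-9;
            std::vector<double> desc;
            for (std::size_t k = 0; k < zt.basis.size(); ++k) {
                std::complex<double> acc = 0.0;
                for (int i = 0; i < PATCH * PATCH; ++i)
                    acc += ((p[i] - mean) / stddev) * zt.basis[k][i];
                desc.push_back(std::abs(acc));    // |A_{n,m}|: rotation invariant
            }
            return desc;
        }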

    Handheld AR for Collaborative Edutainment

    Handheld Augmented Reality (AR) is expected to provide ergonomic, intuitive user interfaces for untrained users. Yet no comparative study has evaluated these assumptions against more traditional user interfaces for an educational task. In this paper we compare the suitability of a handheld AR arts-history learning game against more traditional variants. We present results from a user study that demonstrate not only the effectiveness of AR for untrained users but also its fun-factor and suitability in environments such as public museums. Based on these results we provide design guidelines that can inform the design of future collaborative handheld AR applications.

    Through the Looking Glass: The Use of Lenses as an Interface Tool for Augmented Reality

    Stephen N. Spencer (The University of Washington). Program Chairs: Alan Chalmers, Hock Soon Seah. Publisher: ACM Press, New York, NY, USA.

    Transitional Interface: Concept, Issues and Framework

    Transitional Interfaces have emerged as a new way to interact and collaborate across different interaction spaces such as Reality, Virtual Reality, and Augmented Reality. In this paper we explore this concept further. We introduce a descriptive model of the concept and its collaborative aspect, and show how it can be generalized to describe natural and continuous transitions between contexts (e.g. across space, scale, viewpoint, and representation).
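
    A minimal sketch of how such a continuous transition might be represented (entirely illustrative; the axes and easing below are assumptions, not the paper's model): each transition dimension, e.g. immersion or scale, is a scalar animated smoothly between contexts rather than switched abruptly.

        #include <algorithm>

        // Each axis of the transition is a scalar; 0 is the physical/AR end of
        // the continuum and 1 the fully virtual end. Other axes (viewpoint,
        // representation) would be handled the same way.
        struct ContextState {
            float immersion;   // 0: live video background (AR), 1: virtual scene (VR)
            float scale;       // world scale (e.g. tabletop miniature -> life-size)
        };

        struct Transition {
            ContextState from, to;
            // Sample the transition at progress u in [0,1] with smoothstep
            // easing, so entering and leaving a context is continuous rather
            // than a hard cut.
            ContextState at(float u) const {
                float s = std::clamp(u, 0.0f, 1.0f);
                s = s * s * (3.0f - 2.0f * s);
                return { from.immersion + (to.immersion - from.immersion) * s,
                         from.scale     + (to.scale     - from.scale)     * s };
            }
        };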

    Collaborating with a Mobile Robot: An Augmented Reality Multimodal Interface

    Invited paper. We have created an infrastructure that allows a human to collaborate in a natural manner with a robotic system. In this paper we describe our system and its implementation with a mobile robot. In our prototype the human communicates with the mobile robot using natural speech and gestures, for example, by selecting a point in 3D space and saying “go here” or “go behind that”. The robot responds using speech so that the human is able to understand its intentions and beliefs. Augmented Reality (AR) technology is used to facilitate natural use of gestures and to provide a common 3D spatial reference for both the robot and the human, thus providing a means for grounding communication and maintaining spatial awareness. This paper first discusses related work, then gives a brief overview of AR and its capabilities. The architectural design we have developed is outlined, and a case study is discussed.
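
    The paper describes the architecture rather than code, but the grounding step can be sketched as pairing a deictic utterance with the 3D point selected in the AR view when the two arrive close together in time. Everything below (the event types, the time window, the "behind" offset rule) is a hypothetical illustration.

        #include <cmath>
        #include <optional>
        #include <string>

        struct Vec3 { double x, y, z; };

        struct SpeechEvent  { std::string verb; double t; };  // e.g. {"go_here", 12.3}
        struct GestureEvent { Vec3 point;       double t; };  // 3D point picked in AR

        struct RobotGoal { Vec3 target; };

        // A spoken deictic command and a pointing gesture are treated as
        // co-referential only if they occur within a short time window.
        std::optional<RobotGoal> fuse(const SpeechEvent& s, const GestureEvent& g,
                                      double window_s = 1.5) {  // window: an assumption
            if (std::fabs(s.t - g.t) > window_s) return std::nullopt;
            if (s.verb == "go_here")
                return RobotGoal{g.point};
            if (s.verb == "go_behind")            // offset the goal past the picked
                return RobotGoal{{g.point.x, g.point.y - 0.5, g.point.z}};  // object (toy rule)
            return std::nullopt;
        }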

    Evaluating the Augmented Reality Human-Robot Collaboration System

    This paper discusses an experimental comparison of three user interface techniques for interaction with a mobile robot located remotely from the user. A typical means of operating a robot in such a situation is to teleoperate it using visual cues from a camera that displays the robot's view of its work environment. However, the operator often has a difficult time maintaining awareness of the robot in its surroundings due to this single ego-centric view. Hence, a multimodal system has been developed that allows the remote human operator to view the robot in its work environment through an Augmented Reality (AR) interface. The operator is able to use spoken dialog, reach into the 3D graphical representation of the work environment, and discuss the intended actions of the robot to create a true collaboration. This study compares the typical ego-centric driven view to two versions of an AR interaction system in an experiment remotely operating a simulated mobile robot. One interface provides an immediate response from the remotely located robot. In contrast, the Augmented Reality Human-Robot Collaboration (AR-HRC) System interface enables the user to discuss and review a plan with the robot prior to execution. The AR-HRC interface was the most effective, increasing accuracy by 30% with less variation while reducing the number of close calls in operating the robot by a factor of roughly three. It thus provides the means to maintain spatial awareness and gives users the feeling of working in a true collaborative environment.
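
    The key design difference between the two AR interfaces can be sketched as a command-queue pattern (a hypothetical illustration, not the system's actual code): immediate mode dispatches each command as issued, while the AR-HRC mode accumulates a plan the user can inspect in the AR view before committing it to the robot.

        #include <cstddef>
        #include <utility>
        #include <vector>

        struct Waypoint { double x, y; };

        // Plan-review policy: commands are staged and visualized in AR; nothing
        // is sent to the robot until the user explicitly commits the plan.
        class PlanReviewController {
            std::vector<Waypoint> plan_;
        public:
            void propose(Waypoint w)    { plan_.push_back(w); }          // drawn in AR only
            void discard(std::size_t i) { plan_.erase(plan_.begin() + i); }
            std::vector<Waypoint> commit() {                             // user approves
                return std::exchange(plan_, {});                         // hand off to robot
            }
        };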

    Magnetic Flux Braiding: Force-Free Equilibria and Current Sheets

    We use a numerical nonlinear multigrid magnetic relaxation technique to investigate the generation of current sheets in three-dimensional magnetic flux braiding experiments. We catalogue the relaxed nonlinear force-free equilibria resulting from the application of deformations to an initially undisturbed region of plasma containing a uniform, vertical magnetic field. The deformations are imposed as motions on the bounding planes to which the magnetic field is anchored. Once imposed, the new distribution of magnetic footpoints is taken to be fixed, so that the rest of the plasma must relax to a new equilibrium configuration. For the class of footpoint motions we have examined, we find that both singular and nonsingular equilibria can be generated. By singular we mean that, within the limits imposed by numerical resolution, there is no convergence to a well-defined equilibrium as the number of grid points in the numerical domain is increased. These singular equilibria contain current "sheets" of ever-increasing current intensity and decreasing width; they occur when the footpoint motions exceed a certain threshold, and must include both twist and shear to be effective. On the basis of these results we contend that flux braiding will indeed result in significant current generation. We discuss the implications of our results for coronal heating.
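
    For context, a force-free equilibrium is one in which the Lorentz force vanishes, so the current density is everywhere parallel to the field:

        \nabla \times \mathbf{B} = \alpha\,\mathbf{B}, \qquad
        \mathbf{B} \cdot \nabla \alpha = 0, \qquad
        \nabla \cdot \mathbf{B} = 0, \qquad
        \mathbf{j} = \frac{1}{\mu_0}\,\nabla \times \mathbf{B}

    The constraint that alpha is constant along field lines follows from taking the divergence of the first equation. A current sheet in this setting is a surface on which the current density j fails to remain bounded: as the grid is refined, the sheet's width keeps shrinking while its intensity keeps growing, which is the non-convergence behaviour the abstract uses to identify singular equilibria.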

    An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures

    This paper discusses an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and multimodal fusion strategies, which are based on the combination of time-based and domain semantics. We then present the results of a user study comparing multimodal input with gesture input alone. The results show that a combination of speech and paddle gestures improves the efficiency of user interaction. Finally, we describe some design recommendations for developing other multimodal AR interfaces.
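
    A minimal sketch of the combined time-based and semantic fusion described above (all names, gesture types, and the time window are illustrative assumptions): a spoken command is paired with the closest-in-time paddle event whose gesture type is semantically compatible with the command.

        #include <cmath>
        #include <optional>
        #include <string>
        #include <vector>

        struct Pose { double x, y, z, yaw; };
        enum class Gesture { Place, Pick, Shake };

        struct SpeechCmd   { std::string action; double t; };  // "place", "pick", ...
        struct PaddleEvent { Gesture g; Pose pose;  double t; };

        struct FusedCommand { std::string action; Pose where; };

        // Domain semantics: which gesture types can realize which spoken action.
        static bool compatible(const std::string& a, Gesture g) {
            if (a == "place")  return g == Gesture::Place;
            if (a == "pick")   return g == Gesture::Pick;
            if (a == "delete") return g == Gesture::Shake;
            return false;
        }

        // Time-based fusion: pick the compatible paddle event closest in time
        // to the utterance, rejecting pairs outside the window.
        std::optional<FusedCommand> fuse(const SpeechCmd& s,
                                         const std::vector<PaddleEvent>& recent,
                                         double window_s = 2.0) {  // window: an assumption
            const PaddleEvent* best = nullptr;
            double best_dt = window_s;
            for (const auto& e : recent) {
                double dt = std::fabs(e.t - s.t);
                if (dt <= best_dt && compatible(s.action, e.g)) {
                    best = &e;
                    best_dt = dt;
                }
            }
            if (!best) return std::nullopt;   // no co-occurring, compatible gesture
            return FusedCommand{s.action, best->pose};
        }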